Adversarial Machine Learning


Adversarial Machine Learning: Attacks, Defenses, and Open Challenges

Jha, Pranav K

arXiv.org Artificial Intelligence

Adversarial Machine Learning (AML) addresses vulnerabilities in AI systems where adversaries manipulate inputs or training data to degrade performance. This article provides a comprehensive analysis of evasion and poisoning attacks, formalizes defense mechanisms with mathematical rigor, and discusses the challenges of implementing robust solutions under adaptive threat models. Additionally, it highlights open challenges in certified robustness, scalability, and real-world deployment.
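To ground the evasion setting described above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a canonical single-step evasion attack. The untrained linear classifier and random inputs are illustrative stand-ins, not the article's own models or experiments.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """FGSM: perturb the input one step in the direction of the sign of
    the loss gradient, within an L-infinity budget of epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy demonstration on random "images" in [0, 1].
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # perturbation stays within epsilon

Poisoning attacks, by contrast, tamper with the training data rather than the test-time input.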


AdvSecureNet: A Python Toolkit for Adversarial Machine Learning

Catal, Melih, Günther, Manuel

arXiv.org Artificial Intelligence

Machine learning models are vulnerable to adversarial attacks. Several tools have been developed to research these vulnerabilities, but they often lack comprehensive features and flexibility. We introduce AdvSecureNet, a PyTorch-based toolkit for adversarial machine learning that is the first to natively support multi-GPU setups for attacks, defenses, and evaluation. It is also the first toolkit to support both CLI and API interfaces together with external YAML configuration files, enhancing versatility and reproducibility. The toolkit includes multiple attacks, defenses, and evaluation metrics. Rigorous software engineering practices are followed to ensure high code quality and maintainability.
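As a rough illustration of how a YAML-driven workflow of this kind can look, the sketch below parses a configuration string and dispatches to the attack it names. The schema, keys, and dispatch table are hypothetical stand-ins; they are not AdvSecureNet's actual configuration format or API. Requires PyYAML.

import yaml

# Hypothetical configuration schema, for illustration only -- this is
# not AdvSecureNet's real YAML format.
CONFIG_YAML = """
attack:
  name: fgsm
  epsilon: 0.03
evaluation:
  metric: robust_accuracy
device: cuda:0
"""

def run_from_config(text):
    """Parse a YAML config and dispatch to the attack it names."""
    cfg = yaml.safe_load(text)
    attacks = {"fgsm": lambda p: f"running FGSM with epsilon={p['epsilon']}"}
    return attacks[cfg["attack"]["name"]](cfg["attack"])

print(run_from_config(CONFIG_YAML))

One motivation for externalizing settings this way is reproducibility: a single config file can drive either interface, so an experiment can be rerun exactly.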


Adversarial Machine Learning in Latent Representations of Neural Networks

Zhang, Milin, Abdi, Mohammad, Restuccia, Francesco

arXiv.org Artificial Intelligence

Distributed deep neural networks (DNNs) have been shown to reduce the computational burden of mobile devices and decrease the end-to-end inference latency in edge computing scenarios. While distributed DNNs have been studied, to the best of our knowledge the resilience of distributed DNNs to adversarial action remains an open problem. In this paper, we fill this research gap by rigorously analyzing the robustness of distributed DNNs against adversarial action. We cast the problem in the context of information theory and introduce two new measurements, one for distortion and one for robustness. Our theoretical findings indicate that (i) assuming the same level of information distortion, latent features are always more robust than input representations; and (ii) adversarial robustness is jointly determined by the feature dimension and the generalization capability of the DNN. To test these findings, we perform an extensive experimental analysis considering 6 different DNN architectures, 6 different approaches for distributing DNNs, and 10 different adversarial attacks on the ImageNet-1K dataset. Our experimental results support the theory: compressed latent representations reduce the success rate of adversarial attacks by 88% in the best case and by 57% on average, compared with attacks on the input space.
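The contrast between attacking the raw input and attacking the transmitted latent features can be sketched as follows. The tiny encoder/head split, the single-step attack, and all shapes are toy assumptions, not the paper's architectures or measurements.

import torch
import torch.nn as nn

# Illustrative split of a classifier into an on-device encoder and a
# server-side head, mimicking a distributed-DNN deployment.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
head = nn.Linear(64, 10)

def fgsm_on(forward_fn, v, y, epsilon=0.03):
    """One signed-gradient step against whichever representation v is."""
    v_adv = v.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(forward_fn(v_adv), y).backward()
    return (v_adv + epsilon * v_adv.grad.sign()).detach()

x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

# Input-space attack: the adversary perturbs the raw image.
x_adv = fgsm_on(lambda v: head(encoder(v)), x, y)

# Latent-space attack: the adversary perturbs the transmitted features.
z_adv = fgsm_on(head, encoder(x), y)

# The latent code has 64 dimensions versus 3*32*32 = 3072 in input space,
# so the same per-coordinate budget buys far fewer degrees of freedom.
print(x[0].numel(), encoder(x)[0].numel())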


Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally

Al-Maliki, Shawqi, Qayyum, Adnan, Ali, Hassan, Abdallah, Mohamed, Qadir, Junaid, Hoang, Dinh Thai, Niyato, Dusit, Al-Fuqaha, Ala

arXiv.org Artificial Intelligence

Deep Neural Networks (DNNs) have been the driving force behind many of the recent advances in machine learning. However, research has shown that DNNs are vulnerable to adversarial examples -- input samples that have been perturbed to force DNN-based models to make errors. As a result, Adversarial Machine Learning (AdvML) has gained a lot of attention, and researchers have investigated these vulnerabilities in various settings and modalities. In addition, DNNs have also been found to incorporate embedded bias and often produce unexplainable predictions, which can result in anti-social AI applications. The emergence of new AI technologies that leverage Large Language Models (LLMs), such as ChatGPT and GPT-4, increases the risk of producing anti-social applications at scale. AdvML for Social Good (AdvML4G) is an emerging field that repurposes the AdvML bug to invent pro-social applications. Regulators, practitioners, and researchers should collaborate to encourage the development of pro-social applications and hinder the development of anti-social ones. In this work, we provide the first comprehensive review of the emerging field of AdvML4G. This paper encompasses a taxonomy that highlights the emergence of AdvML4G, a discussion of the differences and similarities between AdvML4G and AdvML, a taxonomy covering social good-related concepts and aspects, an exploration of the motivations behind the emergence of AdvML4G at the intersection of ML4G and AdvML, and an extensive summary of the works that utilize AdvML4G as an auxiliary tool for innovating pro-social applications. Finally, we elaborate upon various challenges and open research issues that require significant attention from the research community.


The race to robustness: exploiting fragile models for urban camouflage and the imperative for machine learning security

Farlow, Harriet, Garratt, Matthew, Mount, Gavin, Lynar, Tim

arXiv.org Artificial Intelligence

Adversarial Machine Learning (AML) represents the ability to disrupt Machine Learning (ML) algorithms through a range of methods that broadly exploit the architecture of deep learning optimisation. This paper presents Distributed Adversarial Regions (DAR), a novel method that implements distributed instantiations of computer vision-based AML attack methods, which may be used to disguise objects from image recognition in both white-box and black-box settings. We consider the context of object detection models used in urban environments, and benchmark the MobileNetV2, NasNetMobile and DenseNet169 models against a subset of relevant images from the ImageNet dataset. We evaluate optimal parameters (size, number and perturbation method), and compare DARs to state-of-the-art AML techniques that perturb the entire image. We find that DARs can cause a reduction in confidence of 40.4% on average, with the benefit of not requiring the entire image, or the focal object, to be perturbed. The DAR method is deliberately simple: the intention is to highlight how an adversary with very little skill could attack models that may already be productionised, and to emphasise the fragility of foundational object detection models. We present this as a contribution to the field of ML security as well as AML. This paper contributes a novel adversarial method and an original comparison between DARs and other AML methods, and frames them in a new context: urban camouflage and the necessity of ML security and model robustness.
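A simplified stand-in for the region-based idea (not the authors' DAR implementation) is sketched below: a few small patches of an image are perturbed with random noise while the rest of the frame, including the focal object, is left untouched. Patch count, size, and noise scale are illustrative parameters.

import torch

def apply_regions(image, n_regions=3, size=8, scale=0.5, seed=0):
    """Perturb a few small square regions of a CHW image in [0, 1] with
    random noise, leaving everything else unchanged."""
    g = torch.Generator().manual_seed(seed)
    out = image.clone()
    _, h, w = image.shape
    for _ in range(n_regions):
        top = torch.randint(0, h - size, (1,), generator=g).item()
        left = torch.randint(0, w - size, (1,), generator=g).item()
        noise = scale * torch.rand(image.shape[0], size, size, generator=g)
        out[:, top:top + size, left:left + size] += noise
    return out.clamp(0.0, 1.0)

img = torch.rand(3, 64, 64)             # placeholder image
patched = apply_regions(img)
print((patched != img).float().mean())  # fraction of pixels touched

In an actual evaluation, one would feed both images to a pretrained detector or classifier and compare the confidence of the original prediction, mirroring the confidence-reduction metric reported above.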


Adversarial machine learning explained: How attackers disrupt AI and ML systems

#artificialintelligence

As more companies roll out artificial intelligence (AI) and machine learning (ML) projects, securing them becomes more important. A report released by IBM and Morning Consult in May stated that, of more than 7,500 global businesses surveyed, 35% are already using AI, up 13% from last year, while another 42% are exploring it. However, almost 20% of companies said they were having difficulty securing data and that this was slowing down AI adoption. In a survey conducted last spring by Gartner, security concerns were a top obstacle to adopting AI, tied for first place with the complexity of integrating AI solutions into existing infrastructure. According to a paper Microsoft released last spring, 90% of organizations aren't ready to defend themselves against adversarial machine learning.


What is Adversarial Machine Learning? - KDnuggets

#artificialintelligence

With the continuous rise of Machine Learning (ML), our society is becoming heavily reliant on its real-world applications. However, the more dependent we become on ML models, the more exposed we are to methods of defeating them. The dictionary definition of an "adversary" is "one that contends with, opposes, or resists." In the cybersecurity sector, adversarial machine learning attempts to deceive and trick models by crafting unique deceptive inputs that confuse a model into malfunctioning. Adversaries may supply inputs intended to compromise or alter the output by exploiting the model's vulnerabilities. We are unable to identify these inputs with the human eye, yet they cause the model to fail.


What is Adversarial Machine Learning?

#artificialintelligence

Machine learning models are complicated things, and we often have a poor understanding of how they make predictions. This can leave hidden weaknesses for attackers to exploit: they could trick a model into making incorrect predictions or into giving away sensitive information. Fake data could even be used to corrupt models without our knowledge. The field of adversarial machine learning aims to address these weaknesses.
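As a minimal illustration of the data-corruption point, the sketch below implements label flipping, one of the simplest poisoning attacks: a fraction of training labels is silently reassigned before training. The random labels and flip fraction are illustrative assumptions.

import torch

def flip_labels(labels, fraction=0.1, num_classes=10, seed=0):
    """Label-flipping poisoning: reassign a fraction of training labels
    to a different class so the model learns a degraded decision rule."""
    g = torch.Generator().manual_seed(seed)
    poisoned = labels.clone()
    n = int(fraction * len(labels))
    idx = torch.randperm(len(labels), generator=g)[:n]
    # Shift each chosen label by a nonzero offset, guaranteeing a change.
    offsets = torch.randint(1, num_classes, (n,), generator=g)
    poisoned[idx] = (poisoned[idx] + offsets) % num_classes
    return poisoned

y = torch.randint(0, 10, (100,))
y_poisoned = flip_labels(y)
print((y != y_poisoned).sum().item(), "labels flipped")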


Everything You Need to Know About Adversarial Machine Learning

#artificialintelligence

Machine learning is a key aspect of Artificial Intelligence. However, one area that has always been a cause for worry is adversarial attacks: because of them, models trained to behave in a particular way fail to do so and act in undesired ways. Computer vision is one of the areas that has attracted the most attention; it is where deployed AI systems help process visual data.